jsPsychR

Open source, standard tooling for experimental protocols: towards Registered reports

Gorka Navarrete and Herman Valencia

Running experiments

Old school:

  1. Read a bit
  2. Have an idea
  3. Prepare experiment
  4. Run experiment
  5. Analyze experiment
  6. Write paper

Garden of forking paths

Rubin (2017)

Experimenter degrees of freedom, incentives, issues

  • What if… -> Garden of forking paths

  • p-hacking

  • The need for significance and novelty

  • False positive research findings

Registered reports (RR)



“RRs were conceived to alter the incentives for authors and journals away from producing novel, positive, clean findings and towards conducting and publishing rigorous research on important questions.” (Soderberg et al. 2021)

How RR work

  • Write introduction, method, … before collecting data!
  • Send to journal for review
  • Revise and resubmit (improve before collecting data)
  • Once you get in-principle acceptance (IPA), collect human data, run the analysis, write up, and send for a final review

RR advantages

  • More open, preregistered, reproducible by default

  • It does not matter whether the p-value is < 0.05

  • Fewer incentives for p-hacking

  • More trustworthy results

  • You can still explore, but exploratory analyses must be flagged explicitly

Registered reports are great

But isn’t this a bit…



  • Before having the data available, it is hard to know how to analyze it
  • There are always surprises when receiving new data. How can I create an analysis plan?

Our path towards RR



Background

We (CSCN; ~5-10 PIs) used different technologies to develop experiments: PsychoPy, Qualtrics, LimeSurvey, jsPsych, etc.

Each of these has advantages and disadvantages.

Mostly, pragmatic aspects guided the decision: lab history and resources, coding experience, type of experiment (EEG/behavioral, lab/online), …

Issues

Each protocol started almost from scratch. Sometimes a single task would define the technology used.

At some point, we had multiple implementations of the same tasks in different technologies, not always exact replicas.

Some would work on certain computers, others did not. Output data were wildly different.

Issues Survey



  • Experiments
  • Resources
  • Reproducibility

Experiment issues

  • Errors in experiments already run
  • Errors in items coding
  • Data not what we expected
  • Data preparation is hard
  • Match between hypotheses and data not clear
  • Variables or questions not used in the analysis/paper

Resources issues: projects as islands

  • Hours wasted re-programming tasks
  • Thousands of € ‘invested’ in licenses (e.g. Qualtrics)
  • Piloting protocols as a part-time job for Research Assistants
  • Hours wasted re-doing data preparation (each software has its own output format; each project is an island)

Reproducibility issues

  • Does anyone know why this 2012 paradigm is not running?
  • Location and organization of projects
  • Data preparation/analyses so ugly that sharing them is not something you can do immediately (let me clean this up a bit before sending it…)
  • Idiosyncratic analyses, some of which require licensed closed-source software (SPSS, Matlab, …)

Our wish list

  • Open source software based on standard technologies
  • Based on mature project/technologies
  • Reusable tasks (my project tasks feed future projects)
  • Easy to create paradigms
  • Online/offline
  • Balancing participants

jsPsychR

The team

Initial idea: Gorka

Current developers: Gorka Navarrete and Herman Valencia

Initial development: @nicomero, @Fethrblaka, @nik0lai

Discussions, ideas, testing:

  • Esteban Hurtado

  • Alvaro Rivera

  • Juan Pablo Morales

What is jsPsychR



jsPsychR is a set of open source tools to help create experimental paradigms with jsPsych, simulate participants, and standardize data preparation and analysis.

Goal

We aim to have a big catalog of tasks available in the jsPsychMaker repo. Each task should run with jsPsychMonkeys to create virtual participants, and each task will have a sister script in jsPsychHelpeR to fully automate data preparation (re-coding, reversing items, calculating dimensions, etc.).

The final goal is to help you have the data preparation and analysis ready before collecting any real data, drastically reducing errors in your protocols, and making the move towards registered reports easier.

jsPsychR Tools

The present

  • 3 main R packages (jsPsychMaker, jsPsychMonkeys, jsPsychHelpeR)
  • 1 R package for administration tasks (jsPsychAdmin)
  • >80-page manual
  • ~100 tasks ready (with maker and helper scripts, plus the original paper)
  • >30 online protocols + an unknown number of offline (lab) protocols
  • >5000 participants in online protocols
  • Used in Chile, Colombia, and Spain
  • 2 publications using the system + more in the pipeline… (50% Registered Reports)

jsPsychMaker

Features jsPsychMaker

  • Fully open source, based on web standards (jsPsych)
  • Reuse ~ 100 tasks
  • Online and offline protocols
  • Balanced assignment of participants to between-participants conditions
  • Easy to create new tasks
  • Full control over order of tasks (randomization, etc.)
  • Participants can continue where they left off (or not)
  • Limits on time and number of participants
  • Multilingual support (for a selected number of tasks)
  • All parameters can be quickly changed by editing a single file

Available tasks

Create New Tasks

jsPsychMaker::copy_example_tasks(
  destination_folder = "~/Downloads/ExampleTasks"
)

Create protocol

Create a protocol with three existing tasks:

jsPsychMaker::create_protocol(
  canonical_tasks = c("AIM", "EAR", "IRI"),
  folder_tasks = "~/Downloads/ExampleTasks/",
  folder_output = "~/Downloads/protocol999",
  launch_browser = FALSE
)

jsPsychMonkeys

Features jsPsychMonkeys

  • Fully open source (R, docker, selenium)
  • Online and offline
  • Sequentially and in parallel
  • Get pictures of each screen
  • Store logs to make debugging easier
  • Watch the monkeys as they work for you
  • Random pauses or refreshes to simulate human behavior
  • Set a random seed to make the monkeys’ behavior reproducible

Release monkeys

Release a single monkey!

jsPsychMonkeys::release_the_monkeys(
  uid = 1,
  initial_wait = 0,
  wait_retry = 0,
  local_folder_tasks = "~/Downloads/protocol999/",
  open_VNC = TRUE
)

Release a horde of Monkeys in parallel:

jsPsychMonkeys::release_the_monkeys(
  uid = 2:10,
  sequential_parallel = "parallel",
  number_of_cores = 10,
  local_folder_tasks = "~/Downloads/protocol999/",
  open_VNC = FALSE
)

jsPsychHelpeR

Features jsPsychHelpeR

  • Fully open source (R)
  • Get tidy output dataframes for each task, and for the whole protocol
  • Include tests for common issues
  • Functions to help create correction scripts for new tasks using a standard template
  • Automatic reports with progress, descriptives, codebook, etc.
  • Create a fully reproducible Docker container with the project’s data preparation and analysis
  • Create a blinded dataframe to be able to perform blinded analyses

jsPsychHelpeR

Create project for data preparation:

jsPsychHelpeR::run_initial_setup(
  pid = 999,
  data_location = "~/Downloads/protocol999/.data/",
  folder = "~/Downloads/jsPsychR999"
)

jsPsychHelpeR::create_new_task("MultiChoice")

targets::tar_visnetwork(targets_only = TRUE, label = "time")

targets::tar_make()

Challenge: everything in 3 minutes?

Create protocol, simulate participants and prepare data…

# Full process
rstudioapi::navigateToFile("R/script-full-process.R")
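The script itself is not reproduced here, but a minimal sketch of such a full-process script, chaining only the calls already shown in the previous slides (folder names are the illustrative ones used throughout this deck), might look like:

```r
# Sketch of the full pipeline, assuming the example folders used above

# 1. Create a protocol with three existing tasks
jsPsychMaker::create_protocol(
  canonical_tasks = c("AIM", "EAR", "IRI"),
  folder_output = "~/Downloads/protocol999",
  launch_browser = FALSE
)

# 2. Simulate participants with monkeys
jsPsychMonkeys::release_the_monkeys(
  uid = 1:10,
  local_folder_tasks = "~/Downloads/protocol999/"
)

# 3. Set up the data-preparation project and run it
jsPsychHelpeR::run_initial_setup(
  pid = 999,
  data_location = "~/Downloads/protocol999/.data/",
  folder = "~/Downloads/jsPsychR999"
)
targets::tar_make()
```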


Survey results

Let’s try to download the data, process it and show a report with the results:


# Open ExperimentIssues project
rstudioapi::openProject("../Survey/jsPsychHelpeR-ExperimentIssues/jsPsychHelpeR-ExperimentIssues.Rproj", newSession = TRUE)

# If something fails, we always have the monkeys!
browseURL("../Survey/jsPsychHelpeR-ExperimentIssues/outputs/reports/report_analysis_monkeys.html")

Limitations

  • Very easy to create new scales and simple tasks, but complex experimental tasks require JavaScript and HTML knowledge (although a good number of examples are available)

  • Data preparation for new tasks requires expertise in R

  • Requires access to a server for online tasks

The future

Lots of things to do:

  • Experimental tasks

    • Create templates for the most common experimental designs
    • Templates for data preparation of common experimental designs

  • More tasks, more translations

  • So far, development has been based on our needs

  • Upgrade to jsPsych v8 when available

  • Improve, clean, …

So back to Registered reports

  • With jsPsychR, protocols are standardized and have (mostly) clean code. Also, fewer errors!

  • Data preparation is 90% automatic, standardized, and beautiful

  • Super easy to work on the analysis before collecting human data

  • Much easier to write up a good analysis plan

  • Sharing the protocol, materials, and data preparation is trivial (a single command)

  • Creating future-proof full projects (with Docker) is one command away

Help

  • JavaScript programmers

  • R programmers

  • Testers

  • Task creators

References

Rubin, Mark. 2017. “An Evaluation of Four Solutions to the Forking Paths Problem: Adjusted Alpha, Preregistration, Sensitivity Analyses, and Abandoning the Neyman-Pearson Approach.” Review of General Psychology 21 (4): 321–29.
Soderberg, Courtney K, Timothy M Errington, Sarah R Schiavone, Julia Bottesini, Felix Singleton Thorn, Simine Vazire, Kevin M Esterling, and Brian A Nosek. 2021. “Initial Evidence of Research Quality of Registered Reports Compared with the Standard Publishing Model.” Nature Human Behaviour 5 (8): 990–97.

Thanks!



Gorka Navarrete

gorkang@gmail.com

https://fosstodon.org/@gorkang